A Google matrix is a particular stochastic matrix that is used by Google's PageRank algorithm. The matrix represents a graph whose edges are the links between pages. The rank of each page can be computed iteratively from the Google matrix using the power method. However, in order for the power method to converge, the matrix must be stochastic, irreducible and aperiodic.
In order to generate the Google matrix, we must first generate a matrix H representing the relations between pages or nodes.
Assuming there are N pages, we can fill out H by doing the following: set H_ij = 1/k_j if page j has a link to page i, where k_j is the total number of outgoing links of page j, and set H_ij = 0 otherwise.
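As an illustrative sketch (using numpy, with a made-up three-page link structure — not any real web data), H can be assembled column by column from the outgoing links:

```python
import numpy as np

def link_matrix(links, n):
    """Build H with H[i, j] = 1/k_j if page j links to page i, else 0,
    where k_j is the number of outgoing links of page j."""
    H = np.zeros((n, n))
    for j, targets in links.items():
        for i in targets:
            H[i, j] = 1.0 / len(targets)
    return H

# Hypothetical 3-page example: page 0 links to 1 and 2; page 1 links
# to 0; page 2 has no outgoing links (a sink).
H = link_matrix({0: [1, 2], 1: [0]}, 3)
```

Each column of H for a page with outgoing links sums to one; the sink page's column is all zeros, which is exactly what the construction of S below repairs.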
Given H, we can generate G by making H stochastic, irreducible, and aperiodic.
We can first generate the stochastic matrix S from H by adding an edge from every sink state to every other node. In the case where there is only one sink state, at node s, the matrix S is written as:

S_ij = H_ij for j ≠ s, and S_is = 1/N,
where N is the number of nodes.
Then, by creating a relation of weight (1 - α)/N between every pair of nodes that lack one, the matrix will become irreducible. By making the matrix irreducible in this way, we also make it aperiodic, since every node acquires a link to itself.
The final Google matrix G can be computed as:

G = α S + (1 - α) E / N,

where E is the N × N matrix with all elements equal to one and 0 < α < 1 is the damping factor.
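The two steps above — replacing sink columns by uniform columns and mixing in the uniform matrix E/N — can be sketched as follows (assuming numpy and a small dense H; the three-page network is a made-up example):

```python
import numpy as np

def google_matrix(H, alpha=0.85):
    """Form G = alpha*S + (1 - alpha)/N * E from a column-oriented
    link matrix H; sink columns (all zeros) become uniform 1/N."""
    N = H.shape[0]
    S = H.copy()
    sinks = S.sum(axis=0) == 0      # columns with no outgoing links
    S[:, sinks] = 1.0 / N           # spread sink probability uniformly
    return alpha * S + (1.0 - alpha) / N

# Hypothetical 3-page network: page 0 -> {1, 2}, page 1 -> {0},
# page 2 is a sink.
H = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.0],
              [0.5, 0.0, 0.0]])
G = google_matrix(H)
```

Every column of the resulting G sums to one, so G is column-stochastic; note that a sink column ends up uniform, α/N + (1 - α)/N = 1/N.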
By construction, the sum of the non-negative elements in each column of the matrix is equal to unity. Combined with the H computed above and the assumption of a single sink node, the Google matrix can be written elementwise as:

G_ij = α S_ij + (1 - α)/N.   (1)
Although G is a dense matrix, it is computable using H, which is a sparse matrix. For typical modern directed networks the matrix H has only about ten nonzero elements per row, so only about 10N multiplications are needed to multiply a vector by the matrix G [1,2]. An example of the matrix construction of Eq.(1) for a simple network is given in the article CheiRank.
For the actual matrix, Google uses a damping factor α around 0.85 [1,2,3]. The term (1 - α)/N gives a surfer the probability to jump randomly to any page. The matrix G belongs to the class of Perron-Frobenius operators of Markov chains [1]. Examples of the Google matrix structure are shown in Fig.1 for the Wikipedia articles hyperlink network in 2009 at small scale and in Fig.2 for the University of Cambridge network in 2006 at large scale.
For 0 < α < 1 there is only one maximal eigenvalue λ = 1, whose corresponding right eigenvector has non-negative elements P_j that can be viewed as a stationary probability distribution [1]. These probabilities, ordered by decreasing value, give the PageRank vector P_i with the PageRank index K_i used by Google search to rank webpages. Usually one has for the World Wide Web that P ∝ 1/K^β with β ≈ 0.9. The number of nodes with a given PageRank value scales as N_P ∝ 1/P^ν with the exponent ν = 1 + 1/β ≈ 2.1 [4,5]. The left eigenvector at λ = 1 has constant matrix elements. For α < 1 all eigenvalues move as λ_i → α λ_i, except the maximal eigenvalue λ = 1, which remains unchanged [1]. The PageRank vector varies with α, but the other eigenvectors with λ_i < 1 remain unchanged, due to their orthogonality to the constant left vector at λ = 1. The gap between λ = 1 and the other eigenvalues, 1 - α ≈ 0.15, gives rapid convergence of a random initial vector to the PageRank, after approximately 50 multiplications by the G matrix.
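The convergence described above can be sketched with a minimal power iteration (assuming numpy; the toy column-stochastic G below corresponds to the three-page example with α = 0.85, not to any real network):

```python
import numpy as np

def pagerank(G, tol=1e-10, max_iter=200):
    """Power method: iterate p -> G p until the vector converges to the
    right eigenvector of G for lambda = 1, i.e. the PageRank."""
    N = G.shape[0]
    p = np.full(N, 1.0 / N)         # start from the uniform distribution
    for _ in range(max_iter):
        p_next = G @ p
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next
    return p

# Toy column-stochastic Google matrix (alpha = 0.85, 3-page example).
G = np.array([[0.05,  0.90, 1 / 3],
              [0.475, 0.05, 1 / 3],
              [0.475, 0.05, 1 / 3]])
p = pagerank(G)
```

Because the subdominant eigenvalues are bounded by α ≈ 0.85, the error shrinks by at least that factor per multiplication, which is the origin of the "about 50 multiplications" estimate above.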
At α = 1 the matrix G generally has many degenerate eigenvalues (see e.g. [6,7]). Examples of the eigenvalue spectrum of the Google matrix of various directed networks are shown in Fig.3 from [14] and Fig.4 from [7].
The Google matrix can also be constructed for the Ulam networks generated by the Ulam method [8] for dynamical maps. The spectral properties of such matrices are discussed in [9,10,11,12,13,14,15,16]. In a number of cases the spectrum is described by the fractal Weyl law [10,12].
The Google matrix can also be constructed for other directed networks, e.g. for the procedure call network of the Linux Kernel software introduced in [15]. In this case the spectrum of G is described by the fractal Weyl law with a certain fractal dimension (see Fig.5 from [16]). Numerical analysis shows that the eigenstates of the matrix G are localized (see Fig.6 from [16]). The Arnoldi iteration method allows one to compute many eigenvalues and eigenvectors for matrices of rather large size [13,14,16].
Other examples of the G matrix include the Google matrix of the brain [17] and of business process management [18]; see also [19].
The Google matrix with damping factor α was described by Sergey Brin and Larry Page in 1998 [20]; see also the article PageRank and [21],[22].